Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Free, publicly-accessible full text available June 10, 2026
- Free, publicly-accessible full text available May 16, 2026
- Free, publicly-accessible full text available March 15, 2026
- Current PEFT methods for LLMs can achieve high quality, efficient training, or scalable serving, but not all three simultaneously. To address this limitation, we investigate sparse fine-tuning and observe a remarkable improvement in generalization ability. Utilizing this key insight, we propose a family of Structured Sparse Fine-Tuning (S2FT) methods for LLMs, which concurrently achieve state-of-the-art fine-tuning performance, training efficiency, and inference scalability. S2FT accomplishes this by "selecting sparsely and computing densely": it selects a few heads and channels in the MHA and FFN modules, respectively, for each Transformer block. Next, it co-permutes the weight matrices on both sides of the coupled structures in LLMs to connect the selected components in each layer into a dense submatrix. Finally, S2FT performs in-place gradient updates on all submatrices. Through theoretical analysis and empirical results, our method prevents overfitting and forgetting, delivers SOTA performance on both commonsense and arithmetic reasoning with 4.6% and 1.3% average improvements over LoRA, and surpasses full FT by 11.5% when generalizing to various domains after instruction tuning. Using our partial backpropagation algorithm, S2FT saves up to 3× training memory and improves latency by 1.5-2.7× compared to full FT, while delivering an average 10% improvement over LoRA on both metrics. We further demonstrate that the weight updates in S2FT can be decoupled into adapters, enabling effective fusion, fast switching, and efficient parallelism when serving multiple fine-tuned models. (A minimal code sketch of the selection-and-permutation idea appears after this list.)
  Free, publicly-accessible full text available December 10, 2025
- Data literacy is increasingly relevant to everyday life and is a priority for educators across disciplinary boundaries. This study introduces a framework for characterizing data literacy instruction along five key dimensions. It then applies this framework to examine instances of data literacy instruction, such as explanations of data-related concepts and tasks/questions that invite learners to actively engage in data-related practices, in a sample of lessons from a science and a social studies high school textbook. By juxtaposing findings from science and social studies contexts, it examines how these disciplinary approaches compare with each other and identifies areas where they could expand and build on each other to support more effective and holistic data literacy development.
- Data science is increasingly relevant to daily life and has garnered significant attention in education. While data science education has traditionally focused on technical training, justice considerations are increasingly raised, given growing concerns over fairness and justice in data science. This paper introduces a framework for justice-oriented data science education that comprises five areas grounded in a broad range of literature. To explore and refine the framework in authentic contexts, we applied it to discourse data from one participatory design workshop with teachers. Analysis demonstrated the presence of this framework’s areas and their rich connections in teachers’ thinking. The framework offers educators a tool to integrate data science, justice issues, and disciplinary content in K-12 classrooms.
- This document summarizes the efforts of the EMMI Rapid Reaction Task Force on “Suppression and (re)generation of quarkonium in heavy-ion collisions at the LHC”, centered around their 2019 and 2022 meetings. It provides a review of existing experimental results and theoretical approaches, including lattice QCD calculations and semiclassical and quantum approaches for the dynamical evolution of quarkonia in the quark-gluon plasma as probed in high-energy heavy-ion collisions. The key ingredients of the transport models are itemized to facilitate comparisons of calculated quantities such as reaction rates, binding energies, and nuclear modification factors. A diagnostic assessment of the various results is attempted and coupled with an outlook for the future.
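The S2FT abstract above describes its mechanism only at a high level. Below is a minimal sketch of the "select sparsely, compute densely" idea for a single linear layer, written in PyTorch. The class name `SparseDenseLinear` and the `selected_rows` argument are hypothetical illustrations, not the paper's released implementation: the sketch permutes the selected rows of a frozen weight matrix into one contiguous dense submatrix and makes only that slice trainable, which approximates S2FT's in-place dense-submatrix updates for one side of a coupled structure.

```python
# A minimal sketch, assuming plain PyTorch; names are hypothetical and this
# is not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDenseLinear(nn.Module):
    """Fine-tune only a few rows (output channels) of a frozen linear layer.

    The selected rows are permuted to the front of the weight matrix so they
    form one contiguous dense submatrix; gradients then flow through a single
    dense slice rather than a scattered boolean mask.
    """

    def __init__(self, base: nn.Linear, selected_rows: torch.Tensor):
        super().__init__()
        out_features = base.weight.shape[0]
        # Build a permutation that moves the selected rows to the front.
        selected = selected_rows.to(torch.long)
        mask = torch.ones(out_features, dtype=torch.bool)
        mask[selected] = False
        self.register_buffer(
            "perm", torch.cat([selected, torch.arange(out_features)[mask]])
        )
        self.register_buffer("inv_perm", torch.argsort(self.perm))
        k = selected.numel()
        w = base.weight.detach()[self.perm]               # co-permuted weight
        self.w_train = nn.Parameter(w[:k].clone())        # trainable dense submatrix
        self.register_buffer("w_frozen", w[k:].clone())   # frozen remainder
        self.register_buffer(
            "bias", base.bias.detach().clone() if base.bias is not None else None
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reassemble the full weight in its original row order; only the
        # k selected rows carry gradients.
        w = torch.cat([self.w_train, self.w_frozen], dim=0)[self.inv_perm]
        return F.linear(x, w, self.bias)

# Usage: wrap an FFN-style projection and train roughly 2% of its channels.
layer = nn.Linear(4096, 4096)
rows = torch.randperm(4096)[:80]          # sparsely selected output channels
sparse_ft = SparseDenseLinear(layer, rows)
out = sparse_ft(torch.randn(2, 4096))
print(sum(p.numel() for p in sparse_ft.parameters() if p.requires_grad))
```

Because the trainable slice is one contiguous matrix, the backward pass touches a single dense block, which is the property the abstract credits for S2FT's memory and latency savings; the full method additionally co-permutes the matching dimension of the coupled matrix (e.g., the FFN up- and down-projections), which this one-layer sketch omits.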